Results 1 - 20 of 48
1.
Clin Transl Sci ; 15(8): 1848-1855, 2022 08.
Article in English | MEDLINE | ID: mdl-36125173

ABSTRACT

Within clinical, biomedical, and translational science, an increasing number of projects are adopting graphs for knowledge representation. Graph-based data models elucidate the interconnectedness among core biomedical concepts, enable data structures to be easily updated, and support intuitive queries, visualizations, and inference algorithms. However, knowledge discovery across these "knowledge graphs" (KGs) has remained difficult. Data set heterogeneity and complexity; the proliferation of ad hoc data formats; poor compliance with guidelines on findability, accessibility, interoperability, and reusability; and, in particular, the lack of a universally accepted, open-access model for standardization across biomedical KGs have left the task of reconciling data sources to downstream consumers. Biolink Model is an open-source data model that can be used to formalize the relationships between data structures in translational science. It incorporates object-oriented classification and graph-oriented features. The core of the model is a set of hierarchical, interconnected classes (or categories) and relationships between them (or predicates) representing biomedical entities such as gene, disease, chemical, anatomic structure, and phenotype. The model provides class and edge attributes and associations that guide how entities should relate to one another. Here, we highlight the need for a standardized data model for KGs, describe Biolink Model, and compare it with other models. We demonstrate the utility of Biolink Model in various initiatives, including the Biomedical Data Translator Consortium and the Monarch Initiative, and show how it has supported easier integration and interoperability of biomedical KGs, bringing together knowledge from multiple sources and helping to realize the goals of translational science.
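A minimal sketch of the idea behind the abstract above: a KG edge whose nodes carry Biolink categories and whose relationship is a Biolink predicate. The CURIEs follow Biolink conventions, but the specific gene-disease pair and the plain-dictionary representation are illustrative assumptions, not an official Biolink serialization.

```python
# Illustrative only: a knowledge-graph edge annotated with Biolink Model
# categories and a predicate, represented as plain dictionaries.
nodes = {
    "HGNC:1100": {"name": "BRCA1", "category": ["biolink:Gene"]},
    "MONDO:0007254": {"name": "breast carcinoma", "category": ["biolink:Disease"]},
}

edge = {
    "subject": "HGNC:1100",
    "predicate": "biolink:gene_associated_with_condition",
    "object": "MONDO:0007254",
}

def check_edge(edge, nodes):
    """Verify that both endpoints of an edge exist and carry a category."""
    return all(
        node_id in nodes and nodes[node_id]["category"]
        for node_id in (edge["subject"], edge["object"])
    )
```

A downstream consumer can use checks like `check_edge` to confirm that every edge in an ingested KG resolves to categorized nodes before merging sources.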


Subjects
Pattern Recognition, Automated; Translational Science, Biomedical; Knowledge
2.
Database (Oxford) ; 2022: 2022 05 25.
Article in English | MEDLINE | ID: mdl-35616100

ABSTRACT

Despite progress in the development of standards for describing and exchanging scientific information, the lack of easy-to-use standards for mapping between different representations of the same or similar objects in different databases poses a major impediment to data integration and interoperability. Mappings often lack the metadata needed to be correctly interpreted and applied. For example, are two terms equivalent or merely related? Are they narrow or broad matches? Or are they associated in some other way? Such relationships between the mapped terms are often not documented, which leads to incorrect assumptions and makes them hard to use in scenarios that require a high degree of precision (such as diagnostics or risk prediction). Furthermore, the lack of descriptions of how mappings were done makes it hard to combine and reconcile mappings, particularly curated and automated ones. We have developed the Simple Standard for Sharing Ontological Mappings (SSSOM) which addresses these problems by: (i) Introducing a machine-readable and extensible vocabulary to describe metadata that makes imprecision, inaccuracy and incompleteness in mappings explicit. (ii) Defining an easy-to-use simple table-based format that can be integrated into existing data science pipelines without the need to parse or query ontologies, and that integrates seamlessly with Linked Data principles. (iii) Implementing open and community-driven collaborative workflows that are designed to evolve the standard continuously to address changing requirements and mapping practices. (iv) Providing reference tools and software libraries for working with the standard. In this paper, we present the SSSOM standard, describe several use cases in detail and survey some of the existing work on standardizing the exchange of mappings, with the goal of making mappings Findable, Accessible, Interoperable and Reusable (FAIR). The SSSOM specification can be found at http://w3id.org/sssom/spec. 
Database URL: http://w3id.org/sssom/spec.
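The table-based format described above can be sketched as follows: a mapping row with the core SSSOM columns, round-tripped through TSV with the standard library. The column names follow the SSSOM specification; the mapped terms, justification value, and confidence are invented for this example.

```python
import csv
import io

# Illustrative only: a one-row SSSOM mapping set written and re-read as TSV.
rows = [
    {
        "subject_id": "HP:0000118",
        "predicate_id": "skos:exactMatch",
        "object_id": "MP:0000001",
        "mapping_justification": "semapv:ManualMappingCuration",
        "confidence": "0.95",
    }
]
fieldnames = ["subject_id", "predicate_id", "object_id",
              "mapping_justification", "confidence"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=fieldnames, delimiter="\t")
writer.writeheader()
writer.writerows(rows)

# Because the format is plain TSV, it slots into existing data pipelines
# without any ontology parsing.
parsed = list(csv.DictReader(io.StringIO(buf.getvalue()), delimiter="\t"))
```

The point of the format is visible here: the mapping relationship (`skos:exactMatch`) and its provenance (`mapping_justification`) travel with the mapping itself rather than being left implicit.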


Subjects
Metadata; Semantic Web; Data Management; Databases, Factual; Workflow
3.
J Biomed Inform ; 117: 103755, 2021 05.
Article in English | MEDLINE | ID: mdl-33781919

ABSTRACT

Resource Description Framework (RDF) is one of the three standardized data formats in the HL7 Fast Healthcare Interoperability Resources (FHIR) specification and is being used by healthcare and research organizations to join FHIR and non-FHIR data. However, RDF had not previously been integrated into popular FHIR tooling packages, hindering the adoption of FHIR RDF in the Semantic Web and other communities. The objective of this study was to develop and evaluate a Java-based FHIR RDF data transformation toolkit to facilitate the use and validation of FHIR RDF data. We extended the popular HAPI FHIR tooling to add RDF support, thus enabling FHIR data in XML or JSON to be transformed to or from RDF. We also developed an RDF Shape Expressions (ShEx)-based validation framework to verify conformance of FHIR RDF data to the ShEx schemas provided in the FHIR specification for FHIR versions R4 and R5. The effectiveness of ShEx validation was demonstrated by testing it against 2693 FHIR R4 examples and 2197 FHIR R5 examples included in the FHIR specification. Five types of errors, namely missing property, unknown element, missing resourceType, invalid attribute value, and unknown resource name, were revealed in the R5 examples, demonstrating the value of ShEx in the quality assurance of the evolving R5 development. This FHIR RDF data transformation and validation framework, based on HAPI and ShEx, is robust and ready for community use in adopting FHIR RDF, improving FHIR data quality, and evolving the FHIR specification.
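To make the JSON-to-RDF direction concrete, here is a hand-rolled sketch of converting a tiny FHIR JSON fragment into RDF-style triples. It is not the HAPI toolkit described above, and the property IRIs are simplified assumptions; the full FHIR RDF specification defines the real mapping.

```python
# Illustrative only: a minimal FHIR JSON resource turned into triples.
FHIR = "http://hl7.org/fhir/"

patient = {"resourceType": "Patient", "id": "example", "gender": "female"}

def to_triples(resource):
    """Emit (subject, predicate, object) tuples for a flat FHIR resource."""
    subject = f"{FHIR}{resource['resourceType']}/{resource['id']}"
    triples = [(subject, "rdf:type", f"{FHIR}{resource['resourceType']}")]
    for key, value in resource.items():
        if key in ("resourceType", "id"):
            continue  # already encoded in the subject IRI and rdf:type
        triples.append((subject, f"{FHIR}{resource['resourceType']}.{key}", value))
    return triples

triples = to_triples(patient)
```

Real FHIR RDF also handles nested elements, lists, and typed literals, which is exactly why toolkit support of the kind the abstract describes matters.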


Subjects
Delivery of Health Care; Electronic Health Records
4.
J Am Med Inform Assoc ; 28(3): 427-443, 2021 03 01.
Article in English | MEDLINE | ID: mdl-32805036

ABSTRACT

OBJECTIVE: Coronavirus disease 2019 (COVID-19) poses societal challenges that require expeditious data and knowledge sharing. Though organizational clinical data are abundant, these are largely inaccessible to outside researchers. Statistical, machine learning, and causal analyses are most successful with large-scale data beyond what is available in any given organization. Here, we introduce the National COVID Cohort Collaborative (N3C), an open science community focused on analyzing patient-level data from many centers. MATERIALS AND METHODS: The Clinical and Translational Science Award Program and scientific community created N3C to overcome technical, regulatory, policy, and governance barriers to sharing and harmonizing individual-level clinical data. We developed solutions to extract, aggregate, and harmonize data across organizations and data models, and created a secure data enclave to enable efficient, transparent, and reproducible collaborative analytics. RESULTS: Organized in inclusive workstreams, we created legal agreements and governance for organizations and researchers; data extraction scripts to identify and ingest positive, negative, and possible COVID-19 cases; a data quality assurance and harmonization pipeline to create a single harmonized dataset; population of the secure data enclave with data, machine learning, and statistical analytics tools; dissemination mechanisms; and a synthetic data pilot to democratize data access. CONCLUSIONS: The N3C has demonstrated that a multisite collaborative learning health network can overcome barriers to rapidly build a scalable infrastructure incorporating multiorganizational clinical data for COVID-19 analytics. We expect this effort to save lives by enabling rapid collaboration among clinicians, researchers, and data scientists to identify treatments and specialized care and thereby reduce the immediate and long-term impacts of COVID-19.


Subjects
COVID-19; Data Science/organization & administration; Information Dissemination; Intersectoral Collaboration; Computer Security; Data Analysis; Ethics Committees, Research; Government Regulation; Humans; National Institutes of Health (U.S.); United States
5.
AMIA Annu Symp Proc ; 2020: 1140-1149, 2020.
Article in English | MEDLINE | ID: mdl-33936490

ABSTRACT

This study developed and evaluated a JSON-LD 1.1 approach to automate the Resource Description Framework (RDF) serialization and deserialization of Fast Healthcare Interoperability Resources (FHIR) data, in preparation for updating the FHIR RDF standard. We first demonstrated that this JSON-LD 1.1 approach can produce the same output as the current FHIR RDF standard. We then used it to test, document and validate several proposed changes to the FHIR RDF specification, to address usability issues that were uncovered during trial use. This JSON-LD 1.1 approach was found to be effective and more declarative than the existing custom-code-based approach, in converting FHIR data from JSON to RDF and vice versa. This approach should enable future FHIR RDF servers to be implemented and maintained more easily.
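The general idea behind the JSON-LD 1.1 approach can be sketched as pairing ordinary FHIR JSON with an `@context` so that a standard JSON-LD processor can emit RDF. The term mappings below are invented placeholders for illustration, not the official FHIR context files.

```python
# Illustrative only: a FHIR-like JSON document with a JSON-LD context.
# A conforming JSON-LD 1.1 processor would use the @context to interpret
# plain JSON keys as IRIs, yielding RDF without custom conversion code.
fhir_jsonld = {
    "@context": {
        "@vocab": "http://hl7.org/fhir/",  # maps bare keys into the FHIR namespace
        "id": "@id",                       # treats the resource id as the node IRI
    },
    "resourceType": "Observation",
    "id": "http://example.org/fhir/Observation/o1",
    "status": "final",
}
```

This is what the abstract means by "more declarative": the mapping lives in context data rather than in custom serializer code, so updating the FHIR RDF specification becomes an edit to the contexts.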


Subjects
Electronic Health Records/standards; Health Information Interoperability/standards; Programming Languages; Algorithms; Delivery of Health Care; Electronic Health Records/organization & administration; Health Facilities; Health Level Seven; Humans; Information Dissemination; Semantics
7.
J Biomed Inform ; 100S: 100002, 2019.
Article in English | MEDLINE | ID: mdl-34384571

ABSTRACT

The word "ontology" was introduced to information systems when only closed-world reasoning systems were available. It was "borrowed" from philosophy, but literal links to its philosophical meaning were explicitly disavowed. Since then, open-world reasoning systems based on description logics have been developed, OWL has become a standard, and philosophical issues have been raised. The result has too often been confusion. The question "What statements are ontological" receives a variety of answers. A clearer vocabulary that is better suited to today's information systems is needed. The project to base ICD-11 on a "Common Ontology" required addressing this confusion. This paper sets out to systematise the lessons of that experience and subsequent discussions. We explore the semantics of open-world and closed-world systems. For specifying knowledge bases and software, we propose "invariants" or, more fully, "the first order invariant part of the background domain knowledge base" as an alternative to the words "ontology" and "ontological." We discuss the role and limitations of OWL and description logics and how they are complementary to closed world systems such as frames and to less formal "knowledge organisation systems". We illustrate why the conventions of classifications such as ICD cannot be formulated directly in OWL, but can be linked to OWL knowledge bases by queries. We contend that while OWL and description logics are major advances for representing invariants and terminologies, they must be combined with other technologies to represent broader background knowledge faithfully. The ICD-11 architecture is one approach. We argue that such hybrid architectures can and should be developed further.

8.
AMIA Annu Symp Proc ; 2018: 979-988, 2018.
Article in English | MEDLINE | ID: mdl-30815141

ABSTRACT

HL7 Fast Healthcare Interoperability Resources (FHIR) is rapidly becoming the de facto standard for the exchange of clinical and healthcare-related information. Major EHR vendors and healthcare providers are actively developing transformations between existing EHR databases and their corresponding FHIR representation. Many of these organizations are concurrently creating a second set of transformations from the same sources into integrated data repositories (IDRs). Considerable cost savings could be realized, and overall quality improved, were it possible to transform primary FHIR EHR data directly into an IDR. We developed a FHIR to i2b2 transformation toolkit and evaluated the viability of such an approach.


Subjects
Data Warehousing; Datasets as Topic; Electronic Health Records/standards; Health Information Interoperability/standards; Health Level Seven; Biological Ontologies; Humans; Software
9.
AMIA Jt Summits Transl Sci Proc ; 2017: 259-267, 2017.
Article in English | MEDLINE | ID: mdl-28815140

ABSTRACT

In this paper, we present D2Refine, a platform for facilitating the harmonization and standardization of clinical research study data elements. D2Refine is built on top of OpenRefine (formerly Google Refine) and leverages OpenRefine's simple interface and extensible architecture. D2Refine enhances the tabular representation of clinical research study data element definitions by allowing them to be easily organized and standardized using reconciliation services. It builds on OpenRefine's valuable built-in data transformation features to quickly bring source data sets into a refined state. We implemented the reconciliation services and search capabilities based on the standard Common Terminology Services 2 (CTS2), and serialized clinical research study data element definitions into a standard representation using clinical information modeling technology for semantic interoperability. We demonstrate that D2Refine is a useful and promising platform that would help address the emergent needs for clinical research study data element harmonization and standardization.
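Reconciliation services of the kind D2Refine relies on return ranked candidate matches for each source term. The sketch below shows the general shape of such a response and a helper for picking the best candidate; the candidate codes, labels, and scores are invented, and the exact response schema of any given service may differ.

```python
# Illustrative only: the shape of a reconciliation response for one
# source term, and a helper that selects the top-scoring candidate.
def best_candidate(reconciliation_result):
    """Pick the highest-scoring candidate, or None if there are none."""
    candidates = reconciliation_result["result"]
    return max(candidates, key=lambda c: c["score"]) if candidates else None

response = {
    "result": [
        {"id": "C0020538", "name": "Hypertensive disease",
         "score": 98.5, "match": True},
        {"id": "C0155616", "name": "Hypertensive heart disease",
         "score": 71.2, "match": False},
    ]
}
```

In an OpenRefine-style workflow, a curator would review candidates like these cell by cell rather than accepting the top score blindly, which is what makes the approach suitable for standardization work.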

10.
J Biomed Semantics ; 8(1): 19, 2017 Jun 05.
Article in English | MEDLINE | ID: mdl-28583204

ABSTRACT

BACKGROUND: Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of this study was to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. METHODS: We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). RESULTS: We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are useful in support of building detailed clinical models. CONCLUSION: Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.


Subjects
Genomics/methods; Metadata; Neoplasms/genetics; Semantic Web; Humans
11.
J Biomed Inform ; 67: 90-100, 2017 03.
Article in English | MEDLINE | ID: mdl-28213144

ABSTRACT

BACKGROUND: HL7 Fast Healthcare Interoperability Resources (FHIR) is an emerging open standard for the exchange of electronic healthcare information. FHIR resources are defined in a specialized modeling language. FHIR instances can currently be represented in either XML or JSON. The FHIR and Semantic Web communities are developing a third FHIR instance representation format in Resource Description Framework (RDF). Shape Expressions (ShEx), a formal RDF data constraint language, is a candidate for describing and validating the FHIR RDF representation. OBJECTIVE: Create a FHIR to ShEx model transformation and assess its ability to describe and validate FHIR RDF data. METHODS: We created the methods and tools that generate the ShEx schemas modeling the FHIR to RDF specification being developed by HL7 ITS/W3C RDF Task Force, and evaluated the applicability of ShEx in the description and validation of FHIR to RDF transformations. RESULTS: The ShEx models contributed significantly to workgroup consensus. Algorithmic transformations from the FHIR model to ShEx schemas and FHIR example data to RDF transformations were incorporated into the FHIR build process. ShEx schemas representing 109 FHIR resources were used to validate 511 FHIR RDF data examples from the Standards for Trial Use (STU 3) Ballot version. We were able to uncover unresolved issues in the FHIR to RDF specification and detect 10 types of errors and root causes in the actual implementation. The FHIR ShEx representations have been included in the official FHIR web pages for the STU 3 Ballot version since September 2016. DISCUSSION: ShEx can be used to define and validate the syntax of a FHIR resource, which is complementary to the use of RDF Schema (RDFS) and Web Ontology Language (OWL) for semantic validation. CONCLUSION: ShEx proved useful for describing a standard model of FHIR RDF data. The combination of a formal model and a succinct format enabled comprehensive review and automated validation.


Subjects
Algorithms; Internet; Semantics; Electronic Health Records; Humans
12.
Stud Health Technol Inform ; 245: 887-891, 2017.
Article in English | MEDLINE | ID: mdl-29295227

ABSTRACT

A variety of data models have been developed to provide a standardized data interface that supports organizing clinical research data into a standard structure for building integrated data repositories. HL7 Fast Healthcare Interoperability Resources (FHIR) is emerging as a next-generation standards framework for facilitating health care and electronic health record-based data exchange. The objective of this study was to design and assess a consensus-based approach for harmonizing the OHDSI CDM with HL7 FHIR. We leveraged the FHIR W5 (Who, What, When, Where, and Why) Classification System to design the harmonization approaches and assessed their utility in achieving consensus among curators using a standard inter-rater agreement measure. Moderate agreement was achieved for model-level harmonization (kappa = 0.50), whereas only fair agreement was achieved for property-level harmonization (kappa = 0.21). FHIR W5 is a useful tool for designing harmonization approaches between data models and FHIR, and for facilitating consensus.
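The agreement statistic reported above, Cohen's kappa, corrects observed agreement between two raters for the agreement expected by chance. A self-contained sketch, with invented ratings standing in for the curators' model-alignment judgments:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed - expected) / (1 - expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick the same label at random,
    # given each rater's own label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented ratings: each curator labels six candidate CDM-to-FHIR alignments.
rater_1 = ["match", "match", "no", "match", "no", "no"]
rater_2 = ["match", "no", "no", "match", "no", "match"]
kappa = cohens_kappa(rater_1, rater_2)
```

On this toy data the raters agree on 4 of 6 items (0.667 observed) against 0.5 expected by chance, giving kappa = 1/3, which would fall in the "fair agreement" band like the property-level result above.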


Subjects
Consensus; Electronic Health Records; Humans
13.
Stud Health Technol Inform ; 245: 1327, 2017.
Article in English | MEDLINE | ID: mdl-29295408

ABSTRACT

The OHDSI Common Data Model (CDM) is a deep information model in which the vocabulary component plays a critical role in enabling consistent coding and querying of clinical data. The objective of this study was to create methods and tools that expose the OHDSI vocabularies and mappings as vocabulary mapping services using two HL7 FHIR core terminology resources, ConceptMap and ValueSet. We discuss the benefits and challenges of building these FHIR-based terminology services.
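A skeletal sketch of what exposing a vocabulary mapping as a FHIR ConceptMap might look like: one source code mapped to a SNOMED CT code. The structure follows the FHIR R4 ConceptMap resource, but the system URL for the source vocabulary and the specific codes are invented placeholders.

```python
# Illustrative only: a minimal ConceptMap-shaped structure mapping one
# source-vocabulary code to a SNOMED CT code.
concept_map = {
    "resourceType": "ConceptMap",
    "status": "draft",
    "group": [
        {
            "source": "http://example.org/omop-vocabulary",  # placeholder system
            "target": "http://snomed.info/sct",
            "element": [
                {
                    "code": "316866",  # placeholder source code
                    "target": [
                        {"code": "38341003", "equivalence": "equivalent"}
                    ],
                }
            ],
        }
    ],
}

def lookup(concept_map, source_code):
    """Return target codes for a source code, as a terminology service would."""
    return [
        t["code"]
        for group in concept_map["group"]
        for element in group["element"]
        if element["code"] == source_code
        for t in element["target"]
    ]
```

A FHIR terminology server would expose this same lookup through the ConceptMap `$translate` operation rather than a local function; the resource shape is the shared contract.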


Subjects
Electronic Health Records; Vocabulary, Controlled; Humans; Vocabulary
14.
Stud Health Technol Inform ; 228: 431-5, 2016.
Article in English | MEDLINE | ID: mdl-27577419

ABSTRACT

It is investigated whether the content of the Joint Linearization for Mortality and Morbidity Statistics of the 11th ICD revision can be semantically represented by formalisms acting on the clinical terminology SNOMED CT, viz. the IHTSDO Compositional Grammar (CG) and the Expression Constraint Language (ECL). Whereas CG provides a composition syntax for building coordinated SNOMED CT expressions, ECL provides a powerful query mechanism. Both formalisms can be leveraged to guarantee inter-operation between an ontology-based terminology like SNOMED CT and a statistical classification like ICD, characterized by single hierarchies and exhaustive, mutually exclusive classes. We test the feasibility of the method on the circulatory chapter of ICD-11 JLMMS.
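To give a flavour of the ECL query mechanism mentioned above, here is the kind of expression constraint that could select circulatory-system disorders. The concept identifiers are standard SNOMED CT codes, but the constraint itself is an invented example, not one taken from the ICD-11 JLMMS work.

```python
# Illustrative only: an ECL expression held as a string. "<<" means
# "descendant or self of"; the refinement restricts by finding site.
ecl_query = (
    "<< 64572001 |Disease| : "
    "363698007 |Finding site| = << 113257007 |Structure of cardiovascular system|"
)
```

An ECL-capable terminology server would evaluate this against the SNOMED CT hierarchy and return the matching concepts, which can then be compared against the exhaustive, mutually exclusive classes of the corresponding ICD chapter.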


Subjects
International Classification of Diseases/statistics & numerical data; Systematized Nomenclature of Medicine; Cardiovascular Diseases/classification; Humans; Linguistics; Terminology as Topic
15.
J Biomed Inform ; 62: 232-42, 2016 08.
Article in English | MEDLINE | ID: mdl-27392645

ABSTRACT

The Quality Data Model (QDM) is an information model developed by the National Quality Forum for representing electronic health record (EHR)-based electronic clinical quality measures (eCQMs). In conjunction with the HL7 Health Quality Measures Format (HQMF), QDM contains core elements that make it a promising model for representing EHR-driven phenotype algorithms for clinical research. However, the current QDM specification is available only as descriptive documents suitable for human readability and interpretation, but not for machine consumption. The objective of the present study was to develop and evaluate a data element repository (DER) providing machine-readable QDM data element service APIs to support phenotype algorithm authoring and execution. We used the ISO/IEC 11179 metadata standard to capture the structure of each data element, and leveraged Semantic Web technologies to facilitate semantic representation of these metadata. We observed a number of underspecified areas in the QDM, including the lack of model constraints and pre-defined value sets. We propose a harmonization with the models developed in HL7 Fast Healthcare Interoperability Resources (FHIR) and the Clinical Information Modeling Initiative (CIMI) to enhance the QDM specification and enable the extensibility and better coverage of the DER. We also compared the DER with the existing QDM implementation utilized within the Measure Authoring Tool (MAT) to demonstrate the scalability and extensibility of our DER-based approach.


Subjects
Algorithms; Electronic Health Records; Phenotype; Biomedical Research; Databases, Factual; Humans; Semantics
16.
Stud Health Technol Inform ; 223: 267-72, 2016.
Article in English | MEDLINE | ID: mdl-27139413

ABSTRACT

The goal of this work is to contribute to smooth and semantically sound interoperability between ICD-11 (International Classification of Diseases, 11th revision, Joint Linearization for Mortality, Morbidity and Statistics) and SNOMED CT (SCT). To guarantee such interoperation between a classification characterized by a single hierarchy of mutually exclusive and exhaustive classes, as is the JLMMS successor of ICD-10, on the one hand, and the multi-hierarchical, ontology-based clinical terminology SCT on the other hand, we use ontology axioms that logically express generalizable truths. These are expressed in the compositional grammar of SCT, together with queries on the axioms of SCT. We test the feasibility of the method on the circulatory chapter of ICD-11 JLMMS and present results and limitations.


Subjects
International Classification of Diseases/standards; Systematized Nomenclature of Medicine; Linguistics
17.
J Biomed Semantics ; 7: 10, 2016.
Article in English | MEDLINE | ID: mdl-26949508

ABSTRACT

BACKGROUND: The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study was to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. METHODS: We developed a template generation and visualization system based on an open-source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map" based tool for the visualization of generated domain-specific templates. We also developed a RESTful Web Service, informed by the Clinical Information Modeling Initiative (CIMI) reference model, for access to the generated domain-specific templates. RESULTS: A preliminary usability study was performed, and all reviewers (n = 3) responded very positively to the evaluation questions regarding usability and the capability of meeting the system requirements (average score of 4.6). CONCLUSIONS: Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models in the intersection of health care and clinical research.


Subjects
Internet; Medical Informatics/methods; Medical Informatics/standards; Semantics; Biomedical Research; Humans; Models, Theoretical; Reference Standards
18.
J Am Med Inform Assoc ; 23(2): 248-56, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26568604

ABSTRACT

OBJECTIVE: The objective of the Strategic Health IT Advanced Research Project area four (SHARPn) was to develop open-source tools that could be used for the normalization of electronic health record (EHR) data for secondary use--specifically, for high throughput phenotyping. We describe the role of Intermountain Healthcare's Clinical Element Models ([CEMs] Intermountain Healthcare Health Services, Inc, Salt Lake City, Utah) as normalization "targets" within the project. MATERIALS AND METHODS: Intermountain's CEMs were either repurposed or created for the SHARPn project. A CEM describes "valid" structure and semantics for a particular kind of clinical data. CEMs are expressed in a computable syntax that can be compiled into implementation artifacts. The modeling team and SHARPn colleagues agilely gathered requirements and developed and refined models. RESULTS: Twenty-eight "statement" models (analogous to "classes") and numerous "component" CEMs and their associated terminology were repurposed or developed to satisfy SHARPn high throughput phenotyping requirements. Model (structural) mappings and terminology (semantic) mappings were also created. Source data instances were normalized to CEM-conformant data and stored in CEM instance databases. A model browser and request site were built to facilitate the development. DISCUSSION: The modeling efforts demonstrated the need to address context differences and granularity choices and highlighted the inevitability of iso-semantic models. The need for content expertise and "intelligent" content tooling was also underscored. We discuss scalability and sustainability expectations for a CEM-based approach and describe the place of CEMs relative to other current efforts. CONCLUSIONS: The SHARPn effort demonstrated the normalization and secondary use of EHR data. CEMs proved capable of capturing data originating from a variety of sources within the normalization pipeline and serving as suitable normalization targets.


Subjects
Electronic Health Records/standards; Information Storage and Retrieval; Medical Record Linkage/methods; Health Information Systems/standards; Semantics; Utah; Vocabulary, Controlled
19.
AMIA Annu Symp Proc ; 2016: 1119-1128, 2016.
Article in English | MEDLINE | ID: mdl-28269909

ABSTRACT

Researchers commonly use a tabular format to describe and represent clinical study data. The lack of standardization of data dictionaries' metadata elements presents challenges for harmonizing similar studies and impedes interoperability outside the local context. We propose that representing data dictionaries as standardized archetypes can help overcome this problem. The Archetype Modeling Language (AML), developed by the Clinical Information Modeling Initiative (CIMI), can serve as a common format for representing data dictionary models. We mapped three different data dictionaries (identified from dbGaP, PheKB, and TCGA) onto AML archetypes by aligning dictionary variable definitions with AML archetype elements. The near-complete alignment of the data dictionaries allowed them to be mapped into valid AML models that captured all data dictionary model metadata. This work should help subject matter experts harmonize data models for quality, semantic interoperability, and better downstream data integration.


Subjects
Biomedical Research/standards; Databases, Factual/standards; Metadata/standards; Software
20.
Stud Health Technol Inform ; 216: 1098, 2015.
Article in English | MEDLINE | ID: mdl-26262397

ABSTRACT

This study describes our efforts in developing a standards-based semantic metadata repository for supporting electronic health record (EHR)-driven phenotype authoring and execution. Our system comprises three layers: 1) a semantic data element repository layer; 2) a semantic services layer; and 3) a phenotype application layer. In a prototype implementation, we developed the repository and services by integrating the data elements from both the Quality Data Model (QDM) and HL7 Fast Healthcare Interoperability Resources (FHIR) models. We discuss the modeling challenges and the potential of our system to support EHR phenotype authoring and execution applications.


Subjects
Databases, Factual/standards; Electronic Health Records/standards; Health Level Seven/standards; Semantics; Vocabulary, Controlled; Guidelines as Topic; Medical Record Linkage/standards; Natural Language Processing; United States